Results 1 - 11 of 11
1.
Rev. Fac. Med. UNAM ; 66(6): 53-61, Nov.-Dec. 2023. tab, graf
Article in Spanish | LILACS-Express | LILACS | ID: biblio-1535226

ABSTRACT



Evaluation is a systematic process that results in a value judgment used to make decisions. The instruments used to obtain data on student performance require a systematic and objective process for their implementation. The mini-CEX is a direct-observation instrument that has been used to evaluate clinical competence in undergraduate and postgraduate students since its introduction in 1995. It has validity evidence supporting its use in different educational and clinical contexts. It allows rapid assessments, accompanied by feedback, that provide relevant information on the development of clinical competence. The objective of this paper is to describe the experience of implementing the mini-CEX in undergraduate medical education for the formative assessment of students, using simulation with standardized patients. To achieve this goal, the following sequence was used: search, planning, integration, and application. After these steps, a series of recommendations for implementing the mini-CEX is given. It is concluded that the evaluation of clinical competence is important for the continuous improvement of undergraduate and postgraduate students, and that assessment needs to be systematized and always adjusted to its specific objectives and needs.

2.
Teach Learn Med ; : 1-10, 2023 Dec 18.
Article in English | MEDLINE | ID: mdl-38108266

ABSTRACT

Construct: High-stakes assessments measure several constructs, such as knowledge, competencies, and skills. In this case, validity evidence for the uses and interpretations of test scores is of utmost importance because of the consequences for everyone involved in their development and implementation. Background: Educational assessment requires an appropriate understanding and use of validity frameworks; however, health professions educators still struggle with the conceptual challenges of validity, and validity analyses frequently have a narrow focus. Important obstacles are the plurality of validity frameworks and the difficulty of grounding these abstract concepts in practice. Approach: We reviewed the validity frameworks literature to identify the main elements of frequently used models (Messick's and Kane's) and proposed linking frameworks, including Russell's recent overarching proposal. Examples are provided with commonly used assessment instruments in health professions education. Findings: Several elements in these frameworks can be integrated into a common approach, matching and aligning Messick's sources of validity evidence with Kane's four inference types. Conclusions: This proposal to contribute evidence for assessment inferences may provide guidance for understanding the use of validity evidence in applied settings. The evolving field of validity research provides opportunities for its integration and practical use in health professions education.

3.
BMC Med Educ ; 22(1): 456, 2022 Jun 14.
Article in English | MEDLINE | ID: mdl-35701813

ABSTRACT

BACKGROUND: A large proportion of prescribing errors can be attributed to deficiencies in medication knowledge. These errors are preventable and most often occur at the time of prescription. Antimicrobials are the drug class most commonly prescribed incorrectly. OBJECTIVE: To characterize the relationship between clinical competence and antibiotic prescription errors. We also investigated the frequency and severity of antibiotic prescription errors to identify the items and attributes of clinical competence that correlate with the antibiotic prescription error ratio. METHOD: A cross-sectional study was conducted in February 2019 to assess the clinical competence of junior medical residents in two reference academic hospitals and a regional hospital in Mexico City. We used an infectious disease Objective Structured Clinical Examination (OSCE) to assess clinical competence and measured the frequency and severity of antibiotic prescription errors. RESULTS: The number of eligible participants was ~255 (hospital meeting attendance); 51 residents (~20%) took part in the study, of whom 31 (60.8%) were female. The mean OSCE score was 0.692 ± 0.073. Inter-item (Cronbach's alpha = 0.927) and inter-station (Cronbach's alpha = 0.774) internal consistency were adequate. The G coefficient in the generalizability theory analysis was 0.84. The antibiotic prescription error ratio was 45.1% ± 7%. The most frequent severity category of antibiotic prescription errors was category E (errors that may contribute to or result in temporary harm to the patient and require intervention), with 235 errors (65.2%). We observed a negative and significant correlation between clinical competence and antibiotic prescription errors (r = -0.33, p < 0.05, 95% CI -0.57 to -0.07), which remained significant after controlling for gender and time since graduation from medical school (r = -0.39, p < 0.01, 95% CI -0.625 to -0.118). Using exploratory factor analysis, we identified two factors that explained 69% of the variance in clinical competence: factor 1 evaluated socio-clinical skills and factor 2 diagnostic-therapeutic skills. Factor 2 was correlated with the antibiotic prescription error ratio (r = -0.536, p < 0.001). CONCLUSIONS: We observed a negative correlation between clinical competence and the antibiotic prescription error ratio in graduated physicians who had been accepted into a medical specialty. The therapeutic plan, a component of the clinical competence score, and prescription skills were negatively correlated with antibiotic prescription errors. The most frequent antibiotic prescription errors would require a second intervention.
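The partial-correlation step described in this abstract (correlating competence with the error ratio while controlling for gender and time since graduation) can be sketched as follows. This is a minimal illustration: the variable names, covariate coding, and simulated data are assumptions, not the study's data or code.

```python
# Partial correlation by residualizing both variables on the covariates,
# then correlating the residuals (ordinary least squares via numpy).
import numpy as np
from scipy import stats

def residualize(y, covariates):
    """Residuals of y after OLS regression on an intercept plus covariates."""
    X = np.column_stack([np.ones(len(y)), covariates])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def partial_corr(x, y, covariates):
    """Pearson correlation of x and y controlling for the covariates."""
    return stats.pearsonr(residualize(x, covariates), residualize(y, covariates))

# Illustrative simulated data (hypothetical columns: gender, years since graduation).
rng = np.random.default_rng(0)
n = 51
covs = np.column_stack([rng.integers(0, 2, n),        # gender (0/1)
                        rng.uniform(0, 5, n)])         # years since graduation
osce = rng.normal(0.69, 0.07, n)                       # OSCE score
error_ratio = 0.45 - 0.3 * osce + rng.normal(0, 0.05, n)

r, p = partial_corr(osce, error_ratio, covs)
print(f"partial r = {r:.2f}, p = {p:.3f}")
```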


Subject(s)
Clinical Competence , Internship and Residency , Anti-Bacterial Agents/therapeutic use , Cross-Sectional Studies , Drug Prescriptions , Female , Humans , Male
4.
Gac Med Mex ; 153(1): 6-15, 2017.
Article in Spanish | MEDLINE | ID: mdl-28128800

ABSTRACT

INTRODUCTION: Research on the diagnostic and formative assessment of competencies during undergraduate medical training is scarce in Latin America. OBJECTIVE: To assess the level of clinical competence of students at the beginning of their medical internship in a new curriculum. METHODS: This was an observational, cross-sectional study of students at the UNAM Faculty of Medicine in Mexico City: a formative assessment of the second cohort of Curriculum 2010 students, carried out as part of the comprehensive evaluation of the program. The assessment had two components: theoretical and practical. RESULTS: We assessed 577 (65.5%) of the 880 students who finished the ninth semester of Curriculum 2010. The written exam consisted of 232 items, with a mean score of 61.0 ± 19.6, a difficulty index of 0.61, and a Cronbach's alpha of 0.89. The mean score of the objective structured clinical examination (OSCE) was 62.2 ± 16.8, with a mean Cronbach's alpha of 0.51. Results were analyzed by knowledge area and by exam station. CONCLUSIONS: The overall results provide evidence that, at the beginning of the internship, students sufficiently achieve the competencies established in the curriculum, that they have the foundation needed to learn new and more complex information, and that they can integrate it with existing knowledge to achieve meaningful learning and continue their training.
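The two item-level statistics reported for the written exam, the difficulty index and Cronbach's alpha, can be computed as in the sketch below. The simulated response matrix and its dimensions are assumptions used only to show the formulas, not the study's data.

```python
# Difficulty index (proportion correct per item) and Cronbach's alpha
# for a dichotomously scored examinees x items response matrix.
import numpy as np

def difficulty_index(responses):
    """Proportion of correct answers per item (responses: examinees x items, 0/1)."""
    return responses.mean(axis=0)

def cronbach_alpha(responses):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = responses.shape[1]
    item_vars = responses.var(axis=0, ddof=1)
    total_var = responses.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Simulated responses: a latent ability plus item noise, dichotomized.
rng = np.random.default_rng(1)
ability = rng.normal(0, 1, (577, 1))
items = (ability + rng.normal(0, 1, (577, 232)) > -0.3).astype(float)

print(round(difficulty_index(items).mean(), 2), round(cronbach_alpha(items), 2))
```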


Subject(s)
Clinical Competence , Education, Medical, Undergraduate , Internship and Residency , Cross-Sectional Studies , Diagnosis
5.
Gac Med Mex ; 152(5): 439-443, 2016.
Article in Spanish | MEDLINE | ID: mdl-27792705

ABSTRACT

INTRODUCTION: The objective structured clinical examination (OSCE) is the instrument with the most validity evidence for assessing the degree of clinical competence of medical students. OBJECTIVES: To assess the degree of clinical competence of medical students at the end of their internship, and to assess the reliability of the instruments with generalizability (G) theory. METHODS: This was an observational, longitudinal, and comparative study. The target population comprised 5,399 interns from seven cohorts who finished their internship at the UNAM Faculty of Medicine between 2009 and 2015. The instrument was an 18-station OSCE, with three stations for each subject of the internship. RESULTS: The undergraduate medical interns showed a sufficient degree of clinical competence to practice as general practitioners. Laboratory-result interpretation and physical examination had the highest scores; interpretation of imaging studies was the component with the lowest score. The Family Medicine area had the highest average OSCE score, whereas Pediatrics had the lowest. Reliability, measured with generalizability theory, ranged between 0.81 and 0.93. CONCLUSIONS: The clinical competence of undergraduate medical interns is considered sufficient. The results also identify the subjects that require educational interventions to improve students' clinical competence.
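A minimal sketch of the decision-study logic behind G coefficients like those reported here: given person and residual variance components, the relative G coefficient can be projected for different numbers of OSCE stations. The variance-component values below are hypothetical, not the study's estimates.

```python
# Relative G coefficient for a person x station design as a function of
# the number of stations: var_person / (var_person + var_residual / n).
def g_coefficient(var_person, var_residual, n_stations):
    return var_person / (var_person + var_residual / n_stations)

var_person, var_residual = 25.0, 90.0   # hypothetical variance components
for n in (6, 12, 18, 24):
    print(f"{n} stations -> G = {g_coefficient(var_person, var_residual, n):.2f}")
```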


Subject(s)
Clinical Competence/standards , Internship and Residency/standards , Students, Medical , Cohort Studies , Educational Measurement/standards , Humans , Internship and Residency/statistics & numerical data , Longitudinal Studies , Mexico , Reproducibility of Results , Students, Medical/statistics & numerical data
6.
Med Educ Online ; 21: 31650, 2016.
Article in English | MEDLINE | ID: mdl-27543188

ABSTRACT

BACKGROUND: The objective structured clinical examination (OSCE) is a widely used method for assessing clinical competence in health sciences education. Studies using this method have shown evidence of validity and reliability. There are no published studies of OSCE reliability measurement with generalizability theory (G-theory) in Latin America. The aims of this study were to assess the reliability of an OSCE in medical students using G-theory and to explore its usefulness for quality improvement. METHODS: An observational cross-sectional study was conducted at the National Autonomous University of Mexico (UNAM) Faculty of Medicine in Mexico City. A total of 278 fifth-year medical students were assessed with an 18-station OSCE in a summative end-of-career final examination. There were four exam versions. G-theory with a crossed random-effects design was used to identify the main sources of variance. Examiners, standardized patients, and cases were considered a single facet of analysis. RESULTS: The OSCE had a generalizability coefficient of 0.93. The major components of variance were stations, students, and residual error. The sites and the versions of the test had minimal variance. CONCLUSIONS: Our study achieved a G coefficient similar to those found in other reports, which is acceptable for summative tests. G-theory allows the estimation of the magnitude of multiple sources of error and helps decision makers determine the number of stations, test versions, and examiners needed to obtain reliable measurements.
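Since the abstract collapses examiners, standardized patients, and cases into a single station facet, the core calculation reduces to a fully crossed person-by-station G-study. The sketch below estimates variance components from ANOVA mean squares and forms a G coefficient; the simulated score matrix is an assumption, not the exam data, and the design is simplified relative to the study's full model.

```python
# Fully crossed person x station G-study: variance components from ANOVA
# mean squares, then the relative G coefficient.
import numpy as np

def g_study(scores):
    """scores: persons x stations matrix with one observation per cell."""
    n_p, n_s = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    station_means = scores.mean(axis=0)
    ss_p = n_s * ((person_means - grand) ** 2).sum()
    ss_s = n_p * ((station_means - grand) ** 2).sum()
    ss_res = ((scores - grand) ** 2).sum() - ss_p - ss_s
    ms_p = ss_p / (n_p - 1)
    ms_s = ss_s / (n_s - 1)
    ms_res = ss_res / ((n_p - 1) * (n_s - 1))
    var_res = ms_res
    var_p = max((ms_p - ms_res) / n_s, 0.0)
    var_s = max((ms_s - ms_res) / n_p, 0.0)
    g = var_p / (var_p + var_res / n_s)
    return var_p, var_s, var_res, g

# Simulated 278 students x 18 stations: person effect + station effect + residual.
rng = np.random.default_rng(2)
scores = (rng.normal(60, 8, (278, 1))
          + rng.normal(0, 5, (1, 18))
          + rng.normal(0, 10, (278, 18)))

var_p, var_s, var_res, g = g_study(scores)
print(f"person var = {var_p:.1f}, station var = {var_s:.1f}, "
      f"residual var = {var_res:.1f}, G = {g:.2f}")
```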


Subject(s)
Clinical Competence/standards , Education, Medical/methods , Educational Measurement/methods , Models, Theoretical , Cross-Sectional Studies , Humans , Mexico , Reproducibility of Results
7.
Gac Med Mex ; 150(1): 35-48, 2014.
Article in Spanish | MEDLINE | ID: mdl-24481430

ABSTRACT

INTRODUCTION: In Latin America there is almost no published information about knowledge retention and the formative assessment of competencies in medical students during medical school training and curricular changes. OBJECTIVE: To assess the level of knowledge and the clinical competencies of medical students at the end of the second year of a new curriculum. METHODS: Observational, cross-sectional study of UNAM Faculty of Medicine students. A diagnostic evaluation was performed in the first cohort of the "Plan of Studies 2010" curriculum, as part of a comprehensive program evaluation strategy. The assessment had two components: theoretical and practical. RESULTS: We assessed 456 (87%) of the 524 students who successfully completed the second year of Plan 2010. The written test had 211 items, a mean score of 60 ± 14.5, a mean difficulty index of 0.60, and a reliability (Cronbach's alpha) of 0.85. The OSCE mean global score was 58 ± 9, with a Cronbach's alpha of 0.36 and a G coefficient of 0.48. Results were reported by knowledge area, course, and station. CONCLUSIONS: The results are in general acceptable compared with previous written evaluations at the end of the second year, suggesting that the new program is achieving its educational goals. Competencies were formally assessed for the first time in our institution, establishing a starting point for follow-up. The study provided useful information to the institution, teachers, and students.


Subject(s)
Clinical Competence , Curriculum , Education, Medical , Cross-Sectional Studies , Education, Medical/standards , Mexico , Students, Medical
8.
Gac Med Mex ; 150(1): 8-17, 2014.
Article in Spanish | MEDLINE | ID: mdl-24481426

ABSTRACT

INTRODUCTION: The Objective Structured Clinical Examination (OSCE) is a widely used measurement tool for assessing clinical competence in the health sciences. There is little published evidence of its use in Mexican medical schools. OBJECTIVE: To assess clinical competence in medical students with an OSCE before and after the medical internship. METHODS: Prospective cohort study with a pretest-posttest design. The assessed population was medical students at the UNAM Faculty of Medicine in Mexico in their internship year. The instrument was an 18-station OSCE, with three stations per academic area of the internship curriculum. RESULTS: We assessed the clinical competence of 278 students with a pretest OSCE at the start of the internship year and tested them 10 months later with an equivalent posttest OSCE. The sample represented 30.4% of the total internship population. Test reliability (Cronbach's alpha) was 0.62 in the pretest and 0.64 in the posttest. The global mean score was 55.6 ± 6.6 in the pretest OSCE and 63.2 ± 5.7 in the posttest (p < 0.001), with a Cohen's d of 1.2. CONCLUSIONS: The clinical competence of medical students measured with an OSCE is higher after the medical internship year. This difference suggests that the internship can influence the development of clinical competence in medical students.
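The reported effect size can be approximated with the standard pooled-SD form of Cohen's d, as in the sketch below, which yields a value close to the reported 1.2. Treating the pre- and post-test scores as two groups with a pooled SD is a simplifying assumption for illustration; a paired design can also be analyzed with the SD of the differences.

```python
# Cohen's d from two group means and standard deviations using a pooled SD.
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean2 - mean1) / pooled_sd

# Values from the abstract; n = 278 students in both the pre- and post-test.
print(round(cohens_d(55.6, 6.6, 278, 63.2, 5.7, 278), 2))
```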


Subject(s)
Clinical Competence , Internship and Residency , Female , Humans , Male , Mexico , Universities
9.
Ginecol Obstet Mex ; 70: 558-65, 2002 Nov.
Article in Spanish | MEDLINE | ID: mdl-12561706

ABSTRACT

OBJECTIVE: To assess clinical competence in gynecology and obstetrics among internship students of the Faculty of Medicine, UNAM. METHOD: The study design was descriptive and cross-sectional. We assessed 64 students who had finished their gynecology and obstetrics rotation, using the objective structured clinical examination. The criterion for a competent performance level was arbitrarily set at 60%, both for individual problems and for the exam's global result. RESULTS: Across the 15 stations, the global average was 56.2. The best performances were achieved in the following stations: Pap smear collection (74.7), pregnancy diagnosis (67.9), gynecological and obstetric history taking (67.1), explanation of breast self-examination (62.2), preeclampsia (61.7), and cervicovaginitis (60). All the remaining stations scored below 60. DISCUSSION: The results are lower than those obtained in written exams, which cannot assess clinical skills. A student's performance on one clinical problem did not reliably predict performance on another, so performance seems to be determined more by the specific knowledge and experience related to the case than by a general problem-solving skill. CONCLUSIONS: The results show the advantages of this instrument for assessing clinical skills, which justify its application in the formative process. This work shows that it is necessary to improve the acquisition of basic clinical skills through systematic instructional strategies and greater learning opportunities.


Subject(s)
Clinical Competence/standards , Gynecology/education , Obstetrics/education , Cross-Sectional Studies , Gynecology/standards , Mexico , Obstetrics/standards
10.
Rev. Fac. Med. UNAM ; 41(3): 108-13, May-Jun. 1998. tab
Article in Spanish | LILACS | ID: lil-234020

ABSTRACT

To determine the level of clinical competence with which students begin the undergraduate internship, a diagnostic assessment was administered to 31 students at Hospital Gea González in January 1997. The instrument used was the Objective Structured Clinical Examination, with 29 stations selected according to the health problems most frequently encountered in general practice. A global mean of 47.8 was obtained across the 29 stations, which was compared with the students' overall grade average in all the courses of the first four years of the degree, which was 90.1; a correlation of 0.55 between these two averages was confirmed with the Pearson test (p < 0.01). The results corroborated the advantages of this instrument, which make it well suited for comprehensively assessing clinical skills and for detecting progress and deficiencies in their development, justifying its application in the formative process of physicians.
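The Pearson correlation between the OSCE global score and the grade-point average reported above can be computed as in the sketch below; the simulated scores and variable names are assumptions for illustration only.

```python
# Pearson correlation between two score vectors with scipy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
gpa = rng.normal(90.1, 3.0, 31)                            # course grade average
osce = 47.8 + 1.5 * (gpa - 90.1) + rng.normal(0, 4, 31)    # OSCE global score

r, p = stats.pearsonr(gpa, osce)
print(f"r = {r:.2f}, p = {p:.3f}")
```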


Subject(s)
Humans , Clinical Competence/statistics & numerical data , Competency-Based Education/methods , Education, Medical, Undergraduate/statistics & numerical data , Education, Medical, Undergraduate/methods , Education, Medical/statistics & numerical data , Program Evaluation/methods , Educational Measurement/statistics & numerical data , Educational Measurement/methods , Medicine
11.
Rev. méd. IMSS ; 36(1): 77-82, Jan.-Feb. 1998.
Article in Spanish | LILACS | ID: lil-243087

ABSTRACT

As is currently accepted, assessment should serve not only for accreditation but also to identify progress and deficiencies in students' training, and to redirect teaching strategies toward methodological approaches that develop clinical skills, a fundamental aspect of the physician's work. Clinical competence is a complex activity comprising a set of multidimensional attributes, so a single instrument cannot adequately assess its wide range of components. The purpose of this article is to offer some reflections on the importance, difficulties, and repercussions of assessing clinical competence, and to promote the use of the objective structured clinical examination (OSCE), which offers multiple advantages and has been tested in several countries where it is considered the state of the art in the assessment of clinical competence.


Subject(s)
Aptitude , Health Knowledge, Attitudes, Practice , Clinical Competence , /methods , Delivery of Health Care